

Evaluating Loss Functions for Graph Neural Networks: Towards Pretraining and Generalization

Abbas, Khushnood, Hou, Ruizhe, Wengang, Zhou, Shi, Dong, Ling, Niu, Nan, Satyaki, Abbasi, Alireza

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) have become a standard tool for learning on non-Euclidean data. Their performance, however, depends on choosing both the right model architecture and the right training objective, also called the loss function. These components have largely been studied in isolation, and no large-scale evaluation has examined how GNN architectures and a wide range of loss functions interact across tasks. To address this gap, we conducted a comprehensive study covering seven well-known GNN architectures and a pool of 30 single and hybrid loss functions. Our evaluation spanned three distinct real-world datasets, assessing performance in both inductive and transductive settings using 21 evaluation metrics. From these extensive results (detailed in supplementary information 1 & 2), we analyzed the top ten model-loss combinations for each metric based on their average rank. Our findings reveal that, especially in the inductive case: 1) Hybrid loss functions generally yield superior and more robust performance than single loss functions, indicating the benefit of multi-objective optimization. 2) The GIN architecture consistently showed the highest average performance, especially with Cross-Entropy loss. 3) Although some combinations had lower overall average ranks, models such as GAT, particularly with certain hybrid losses, demonstrated notable specialized strengths, achieving the most top-1 results among the individual metrics and highlighting niche advantages for particular task demands. 4) In contrast, the MPNN architecture typically lagged behind in the scenarios tested.
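The abstract's first finding, that a hybrid loss combining multiple objectives can be more robust than any single loss, can be sketched with a minimal example. This is an illustrative weighted-sum formulation only, not the paper's actual hybrid losses; the function names and the fixed mixing weight `alpha` are assumptions for the sketch.

```python
import math

def cross_entropy(probs, label):
    # negative log-likelihood of the true class
    return -math.log(probs[label])

def mse(probs, label):
    # mean squared error against a one-hot target vector
    target = [1.0 if i == label else 0.0 for i in range(len(probs))]
    return sum((p - t) ** 2 for p, t in zip(probs, target)) / len(probs)

def hybrid_loss(probs, label, alpha=0.5):
    # weighted sum of two objectives; alpha trades one off against the other
    return alpha * cross_entropy(probs, label) + (1 - alpha) * mse(probs, label)

# usage: predicted class probabilities for a node, true class index 0
loss = hybrid_loss([0.7, 0.2, 0.1], label=0)
```

In practice the component losses and the weighting (fixed, scheduled, or learned) are hyperparameters; setting `alpha=1.0` recovers plain cross-entropy.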


The Rise Of Machines That Think

#artificialintelligence

This week's milestones in the history of technology include the end of life of one of the first examples of artificial intelligence or "giant brains" and its 50th anniversary, patents for the transistor, xerography, and carbon paper, and the first solar-powered mobile phone. At 11:45pm, power to the Electronic Numerical Integrator and Computer (ENIAC) is removed. For a few years after it started calculating in 1946, it was "the only fully electronic computer working in the U.S." Thomas Haigh, Mark Priestley and Crispin Rope write in ENIAC in Action: Making and Remaking the Modern Computer: Since 1955, when ENIAC punched its last card, its prominence has only grown… ENIAC was as much symbol as machine, producing cultural meanings as well as numbers… In its own small way, ENIAC has returned frequently to the forefront of public awareness over the decades as a symbol of a variety of virtues and vices. Among other things, the ENIAC was a symbol of the computer as a giant brain (see October 8 entry below), giving rise to today's warnings that artificial intelligence "will be able to do everything better than us." Walter H. Brattain and John Bardeen are granted a patent for a three-electrode circuit element utilizing semiconductive materials, otherwise known as the transistor.


Calendar of Events

AAAI,

AI Magazine

(ICKEDS 2004). GECAD--Knowledge Engineering and Decision Support Research Group. ICINCO Secretariat, Escola Superior de Tecnologia de Setubal, Rua Dr. Antonio Bernardino Almeida / Campus do IPS



All accepted papers will appear in the conference proceedings published by AAAI Press. Selected authors will be invited to submit extended versions of their papers to a special issue of the International Journal on Artificial Intelligence Tools. Papers should not exceed 5 pages and are due by October 24, 2003. All submissions will be done electronically via the FLAIRS web submission system, which will be available through the conference website. Please consult the conference web page for details on paper submission. Ingrid Russell, University of Hartford (irussell@hartford.edu); Valerie Barr, Hofstra University; Zdravko Markov, Central Connecticut State University.




Send applications and inquiries to May Cheh; National Library of Medicine, 8600 Rockville Pike, Mail Stop 54, Bethesda, MD 20894-6075; Email: cheh@nlm.nih.gov